Real-world datasets inevitably contain many mislabeled samples. Because deep neural networks (DNNs) have an enormous capacity to memorize labels, robust training schemes are required to prevent labeling errors from degrading the generalization performance of DNNs. Current state-of-the-art methods adopt a co-training scheme that trains dual networks using samples associated with small losses. In practice, however, training two networks simultaneously burdens computational resources. In this study, we propose a simple yet effective robust training scheme that operates by training only a single network. During training, the proposed method generates temporal self-ensembles by sampling intermediate network parameters from the weight trajectory formed by stochastic gradient descent (SGD) optimization. The sum of losses evaluated with these self-ensembles is used to identify mislabeled samples. In parallel, our method generates multi-view predictions by transforming the input data into various forms and considers their agreement to identify mislabeled samples. By combining the aforementioned metrics, we present the proposed {\it self-ensemble-based robust training} (SRT) method, which filters samples with noisy labels to reduce their influence on training. Experiments on widely used public datasets demonstrate that the proposed method achieves state-of-the-art performance in some categories without training dual networks.
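As a rough illustration of the temporal self-ensemble idea described above, the sketch below snapshots intermediate weights along the SGD trajectory and sums the per-sample losses over the snapshots to flag likely mislabeled data. It assumes a PyTorch-style setup; the function names, snapshot interval, and any filtering threshold are illustrative choices, not values from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def collect_snapshots(model, optimizer, loader, num_epochs, snapshot_every):
    """Train a single network and keep copies of intermediate weights
    (temporal self-ensembles) sampled along the SGD trajectory."""
    snapshots = []
    step = 0
    for _ in range(num_epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
            step += 1
            if step % snapshot_every == 0:
                snapshots.append(copy.deepcopy(model).eval())
    return snapshots

@torch.no_grad()
def ensemble_loss_per_sample(snapshots, x, y):
    """Sum of per-sample losses over the self-ensemble members; samples with a
    large summed loss are treated as likely mislabeled and filtered out."""
    total = torch.zeros(len(y))
    for m in snapshots:
        total += F.cross_entropy(m(x), y, reduction="none")
    return total
```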
Modern deep learning has achieved great success in various fields. However, it requires labeling large amounts of data, which is expensive and labor-intensive. Active learning (AL), which identifies the most informative samples to label, is becoming increasingly important for maximizing the efficiency of the training process. Existing AL methods mostly use only a single final fixed model to acquire the samples to be labeled. This strategy may not be good enough, because it does not consider the structural uncertainty of the model for the given training data when acquiring samples. In this study, we propose a novel acquisition criterion based on the temporal self-ensembles generated by conventional stochastic gradient descent (SGD) optimization. These self-ensemble models are obtained by capturing intermediate network weights across SGD iterations. Our acquisition function relies on a consistency measure between student and teacher models: a fixed number of temporal self-ensemble models serve as the student models, and the teacher model is constructed by averaging the student models. Using the proposed acquisition criterion, we present an AL algorithm, namely student-teacher consistency-based AL (ST-CoNAL). Experiments on image classification tasks conducted on the CIFAR-10, CIFAR-100, Caltech-256, and Tiny ImageNet datasets demonstrate that the proposed ST-CoNAL performs significantly better than existing acquisition methods. Furthermore, extensive experiments show the robustness and effectiveness of our method.
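A minimal sketch of the student-teacher acquisition criterion described above, again assuming PyTorch. The students are weight snapshots taken along the SGD trajectory and the teacher averages their weights; using a KL-divergence consistency term and querying the highest-disagreement samples are illustrative assumptions, not necessarily the paper's exact measure.

```python
import copy
import torch
import torch.nn.functional as F

def build_teacher(students):
    """Teacher model constructed by averaging the weights of the temporal
    self-ensemble (student) models."""
    teacher = copy.deepcopy(students[0])
    ref = teacher.state_dict()
    avg = {}
    for k in ref:
        stacked = torch.stack([s.state_dict()[k].float() for s in students])
        avg[k] = stacked.mean(dim=0).to(ref[k].dtype)
    teacher.load_state_dict(avg)
    return teacher.eval()

@torch.no_grad()
def acquisition_score(students, teacher, x):
    """Student-teacher disagreement on an unlabeled batch x; the samples with
    the largest scores are queried for labeling."""
    t = F.softmax(teacher(x), dim=1)
    score = torch.zeros(x.size(0))
    for s in students:
        log_p = F.log_softmax(s(x), dim=1)
        score += F.kl_div(log_p, t, reduction="none").sum(dim=1)
    return score / len(students)
```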
There is growing interest in the challenging visual perception task of learning from long-tailed class distributions. The extreme class imbalance in the training dataset biases the model toward recognizing majority-class data over minority-class data. Recently, the dual branch network (DBN) framework has been proposed, in which a conventional branch and a re-balancing branch are used to improve the accuracy of long-tailed visual recognition. The re-balancing branch uses a reversed sampler to generate class-balanced training samples, thereby mitigating the bias caused by class imbalance. Although this strategy is quite successful at handling bias, training with a reversed sampler can degrade representation learning performance. To alleviate this issue, conventional methods employ a carefully designed cumulative learning strategy, in which the influence of the re-balancing branch is gradually increased over the course of training. In this study, we aim to develop a simple yet effective method that improves the performance of DBN without the cumulative learning strategy, which requires careful tuning. We design a simple data augmentation method, termed bilateral mixup augmentation, which combines one sample from the uniform sampler with another sample from the reversed sampler to produce a training sample. Furthermore, we introduce class-conditional temperature scaling, which mitigates the bias toward majority classes for the proposed DBN architecture. Our experiments on widely used long-tailed visual recognition datasets show that bilateral mixup augmentation is quite effective in improving the representation performance of DBN and that the proposed method achieves state-of-the-art performance in some categories.
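To make the bilateral mixup idea concrete, here is a hedged sketch: one batch comes from the uniform sampler and one from the reversed sampler, and the two are interpolated; a toy class-conditional temperature rule is included as well. The Beta(alpha, alpha) mixing distribution, the soft-label interpolation, and the particular temperature formula are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import torch
import torch.nn.functional as F

def bilateral_mixup(x_uniform, y_uniform, x_reversed, y_reversed, num_classes, alpha=1.0):
    """Mix a batch drawn by the uniform sampler with a batch drawn by the
    reversed sampler (which over-samples minority classes)."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x_uniform + (1.0 - lam) * x_reversed
    y_u = F.one_hot(y_uniform, num_classes).float()
    y_r = F.one_hot(y_reversed, num_classes).float()
    y = lam * y_u + (1.0 - lam) * y_r
    return x, y

def class_conditional_temperature(logits, class_counts, tau_max=2.0):
    """Illustrative class-conditional temperature scaling: more frequent
    classes get a larger temperature so their logits are softened more."""
    freq = class_counts.float() / class_counts.sum()
    temps = 1.0 + (tau_max - 1.0) * (freq / freq.max())
    return logits / temps
```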
Recognizing the surrounding environment at low latency is critical in autonomous driving. In a real-time setting, the surrounding environment has already changed by the time processing finishes, and current detection models cannot account for changes that occur during processing. Streaming perception has been proposed to assess the latency and accuracy of real-time video perception. However, additional problems arise in real-world applications due to limited hardware resources, high temperatures, and other factors. In this study, we develop a model that can reflect processing delays in real time and produce the most reasonable results. By incorporating the proposed feature queue and feature select module, the system gains the ability to forecast specific time steps without any additional computational cost. Our method is tested on the Argoverse-HD dataset and achieves higher performance than the current state-of-the-art methods (2022.10) in various environments when delayed. The code is available at https://github.com/danjos95/DADE.
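The abstract does not spell out the feature queue and feature select module, so the following is only a hedged sketch of the general idea: buffer recent backbone features with timestamps and pick the one best matched to the time at which the output will actually be consumed. The FIFO buffer, queue length, and nearest-timestamp selection rule are assumptions, not the released implementation.

```python
from collections import deque

class FeatureQueue:
    """Keeps recent backbone features with their timestamps so a prediction
    head can select the features that best match the time at which the
    forecast will actually be used."""
    def __init__(self, maxlen=4):
        self.buf = deque(maxlen=maxlen)

    def push(self, feature, timestamp):
        self.buf.append((timestamp, feature))

    def select(self, target_time):
        # Simplified feature select module: return the stored feature whose
        # timestamp is closest to the forecast target time.
        return min(self.buf, key=lambda tf: abs(tf[0] - target_time))[1]
```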
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
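A very rough sketch of the two-query-type interface the abstract describes, not the actual X-Decoder architecture: learned non-semantic (latent) queries and text-induced semantic queries are decoded jointly against image features, with separate heads for pixel-level and token-level outputs. All dimensions, the vanilla TransformerDecoder, and the head designs are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GeneralizedDecoder(nn.Module):
    """Minimal sketch: latent (non-semantic) queries and text-induced
    (semantic) queries share one decoder and one semantic space."""
    def __init__(self, d_model=512, num_latent_queries=100, vocab_size=30000):
        super().__init__()
        self.latent_queries = nn.Parameter(torch.randn(num_latent_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.mask_head = nn.Linear(d_model, d_model)      # pixel-level outputs
        self.token_head = nn.Linear(d_model, vocab_size)  # token-level outputs

    def forward(self, image_feats, text_queries):
        # image_feats: (B, HW, d), text_queries: (B, T, d)
        b = image_feats.size(0)
        latent = self.latent_queries.unsqueeze(0).expand(b, -1, -1)
        queries = torch.cat([latent, text_queries], dim=1)
        out = self.decoder(queries, image_feats)
        n = self.latent_queries.size(0)
        mask_embed = self.mask_head(out[:, :n])    # dot with pixel features -> masks
        token_logits = self.token_head(out[:, n:]) # language token predictions
        return mask_embed, token_logits
```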
Hinged on the representation power of neural networks, neural radiance fields (NeRF) have recently emerged as one of the promising and widely applicable methods for 3D object and scene representation. However, NeRF faces challenges in practical applications, such as large-scale scenes and edge devices with a limited amount of memory, where data needs to be processed sequentially. Under such incremental learning scenarios, neural networks are known to suffer catastrophic forgetting: easily forgetting previously seen data after training with new data. We observe that previous incremental learning algorithms are limited by either low performance or memory scalability issues. As such, we develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF). MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries. Building on this motivation, our framework learns which rays to query NeRF to extract previous pixel values. The extracted pixel values are then used to train NeRF in a self-distillation manner to prevent catastrophic forgetting. As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
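A hedged sketch of the self-distillation scheme described above: rays from previously seen data are re-queried against a frozen copy of the network, whose outputs supervise the current network alongside the new data. The ray selection, MSE losses, and loss weighting here are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def meil_step(nerf, frozen_nerf, new_rays, new_rgb, prev_rays, distill_weight=1.0):
    """One training step: fit the newly arrived data while distilling the
    previous pixel values that the frozen network still remembers."""
    loss_new = F.mse_loss(nerf(new_rays), new_rgb)
    with torch.no_grad():
        prev_rgb = frozen_nerf(prev_rays)  # pseudo ground truth recalled from the old model
    loss_prev = F.mse_loss(nerf(prev_rays), prev_rgb)
    return loss_new + distill_weight * loss_prev

# frozen_nerf would be a copy of the network taken before each new chunk of
# data arrives, acting as a memory of everything learned so far.
```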
Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks. Over the years, the research community has expanded mix-up methods in two directions, with extensive efforts to improve saliency-guided procedures but minimal focus on the arbitrary path, leaving the randomization domain unexplored. In this paper, inspired by the superior qualities of each direction over one another, we introduce a novel method that lies at the junction of the two routes. By combining the best elements of randomness and saliency utilization, our method balances speed, simplicity, and accuracy. We name our method R-Mix following the concept of "Random Mix-up". We demonstrate its effectiveness in generalization, weakly supervised object localization, calibration, and robustness to adversarial attacks. Finally, in order to address the question of whether there exists a better decision protocol, we train a Reinforcement Learning agent that decides the mix-up policies based on the classifier's performance, reducing dependency on human-designed objectives and hyperparameter tuning. Extensive experiments further show that the agent is capable of performing at the cutting-edge level, laying the foundation for a fully automatic mix-up. Our code is released at [https://github.com/minhlong94/Random-Mixup].
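The abstract leaves the exact mixing rule to the paper, so the snippet below is purely illustrative of combining randomness with saliency utilization rather than a reproduction of R-Mix: the pixel mixing ratio is drawn at random, while the soft label weights reflect how much saliency each source contributes to the mixed image. Every design choice here is an assumption.

```python
import numpy as np
import torch
import torch.nn.functional as F

def random_plus_saliency_mix(x1, y1, x2, y2, saliency1, saliency2, num_classes, alpha=1.0):
    """Illustrative hybrid mixup: random pixel mixing ratio, saliency-weighted
    soft labels. saliency* are per-pixel saliency maps of shape (B, 1, H, W)."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    s1 = lam * saliency1.sum(dim=(1, 2, 3))
    s2 = (1.0 - lam) * saliency2.sum(dim=(1, 2, 3))
    w1 = s1 / (s1 + s2)
    y = (w1.unsqueeze(1) * F.one_hot(y1, num_classes).float()
         + (1.0 - w1).unsqueeze(1) * F.one_hot(y2, num_classes).float())
    return x, y
```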
Natural language explanations promise to offer intuitively understandable explanations of a neural network's decision process in complex vision-language tasks, as pursued in recent VL-NLE models. While current models offer impressive performance on task accuracy and explanation plausibility, they suffer from a range of issues: Some models feature a modular design where the explanation generation module is poorly integrated with a separate module for task-answer prediction, employ backbone models trained on limited sets of tasks, or incorporate ad hoc solutions to increase performance on single datasets. We propose to evade these limitations by applying recent advances in large-scale multi-task pretraining of generative Transformer models to the problem of VL-NLE tasks. Our approach outperforms recent models by a large margin, with human annotators preferring the generated explanations over the ground truth in two out of three evaluated datasets. As a novel challenge in VL-NLE research, we propose the problem of multi-task VL-NLE and show that jointly training on multiple tasks can increase the explanation quality. We discuss the ethical implications of high-quality NLE generation and other issues in recent VL-NLE research.
While witnessing the noisy intermediate-scale quantum (NISQ) era and beyond, quantum federated learning (QFL) has recently become an emerging field of study. In QFL, each quantum computer or device locally trains its quantum neural network (QNN) with trainable gates, and communicates only these gate parameters over classical channels, without costly quantum communications. Towards enabling QFL under various channel conditions, in this article we develop a depth-controllable architecture of entangled slimmable quantum neural networks (eSQNNs), and propose an entangled slimmable QFL (eSQFL) that communicates the superposition-coded parameters of eSQNNs. Compared to the existing depth-fixed QNNs, training the depth-controllable eSQNN architecture is more challenging due to high entanglement entropy and inter-depth interference, which are mitigated by introducing entanglement controlled universal (CU) gates and an inplace fidelity distillation (IPFD) regularizer penalizing inter-depth quantum state differences, respectively. Furthermore, we optimize the superposition coding power allocation by deriving and minimizing the convergence bound of eSQFL. In an image classification task, extensive simulations corroborate the effectiveness of eSQFL in terms of prediction accuracy, fidelity, and entropy compared to Vanilla QFL as well as under different channel conditions and various data distributions.
Our long term goal is to use image-based depth completion to quickly create 3D models from sparse point clouds, e.g. from SfM or SLAM. Much progress has been made in depth completion. However, most current works assume well distributed samples of known depth, e.g. Lidar or random uniform sampling, and perform poorly on uneven samples, such as from keypoints, due to the large unsampled regions. To address this problem, we extend CSPN with multiscale prediction and a dilated kernel, leading to much better completion of keypoint-sampled depth. We also show that a model trained on NYUv2 creates surprisingly good point clouds on ETH3D by completing sparse SfM points.
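Since the abstract builds on CSPN's propagation rule, below is a small hedged sketch of what one propagation step with a dilated kernel could look like. The affinity normalization, dilation value, and re-injection of the sparse keypoint depths follow the standard CSPN formulation in spirit but are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def dilated_cspn_step(depth, affinity, sparse_depth, valid_mask, dilation=2):
    """One convolutional spatial propagation step over a dilated 3x3
    neighborhood. `affinity` holds 8 learned weights per pixel (one per
    neighbor); known sparse depths are re-injected after each step."""
    b, _, h, w = depth.shape
    # Gather the dilated 3x3 neighborhood of every pixel: (B, 9, H, W).
    neigh = F.unfold(depth, kernel_size=3, dilation=dilation, padding=dilation)
    neigh = neigh.view(b, 9, h, w)
    neigh = torch.cat([neigh[:, :4], neigh[:, 5:]], dim=1)  # drop the center tap
    # Normalize the affinities and assign the remaining weight to the center pixel.
    aff = affinity / (affinity.abs().sum(dim=1, keepdim=True) + 1e-8)
    center_w = 1.0 - aff.sum(dim=1, keepdim=True)
    out = center_w * depth + (aff * neigh).sum(dim=1, keepdim=True)
    # Re-inject the known sparse depths (e.g. SfM keypoints) at every iteration.
    return valid_mask * sparse_depth + (1.0 - valid_mask) * out
```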